151 research outputs found

    About Pyramid Structure in Convolutional Neural Networks

    Deep convolutional neural networks (CNNs) have undoubtedly revolutionized various challenging tasks, mainly in computer vision. However, designing such models still requires care to reduce the number of learnable parameters without a meaningful loss in performance. In this paper we investigate to what extent CNNs may take advantage of the pyramid structure typical of biological neurons. A generalized statement over convolutional layers, from the input to the fully connected layer, is introduced that further helps in understanding and designing a successful deep network. It reduces ambiguity, the number of parameters, and their size on disk without degrading overall accuracy. Performance is shown on state-of-the-art models for the MNIST, CIFAR-10, CIFAR-100, and ImageNet-12 datasets. Despite a more than 80% reduction in parameters for Caffe_LENET, competitive results are obtained. Further, despite a 10-20% reduction in training data along with a 10-40% reduction in parameters for the AlexNet model and its variations, competitive results are achieved when compared to similar well-engineered deeper architectures. Comment: Published in the 2016 International Joint Conference on Neural Networks (IJCNN).
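    The abstract does not state the paper's exact pyramid rule; as a hypothetical illustration of why tapering channel widths cuts parameters, the sketch below compares a uniform-width convolutional stack against a pyramid-tapered one (all layer widths here are made-up assumptions, not the paper's configurations):

    ```python
    def conv_params(c_in, c_out, k=3):
        """Learnable parameters in a k x k convolution: weights + biases."""
        return c_in * c_out * k * k + c_out

    def stack_params(widths, k=3):
        """Total parameters of a chain of conv layers with the given channel widths."""
        return sum(conv_params(a, b, k) for a, b in zip(widths[:-1], widths[1:]))

    uniform = stack_params([3, 256, 256, 256, 256])  # constant-width stack
    pyramid = stack_params([3, 256, 128, 64, 32])    # pyramid-tapered stack
    print(uniform, pyramid, 1 - pyramid / uniform)   # tapering cuts ~78% of parameters
    ```

    The point of the sketch: because each layer's cost is the product of its input and output widths, halving the widths along the stack shrinks the parameter count multiplicatively, which is the kind of reduction the abstract reports.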

    Triadic Motifs in the Partitioned World Trade Web

    Abstract. One of the crucial aspects of the Internet of Things that influences the effectiveness of communication among devices is the communication model, for which no universal solution exists. The actual interaction pattern can in general be represented as a directed graph, whose nodes represent the "Things" and whose directed edges represent the sent messages. Frequent patterns can identify channels or infrastructures to be strengthened and can help in choosing the most suitable message-routing schema or network protocol. In general, frequent patterns are called motifs, and overrepresented motifs have been recognized as the low-level building blocks of networks, useful for explaining many of their properties and playing a relevant role in determining their dynamics and evolution. In this paper, triadic motifs are found by first partitioning a network by strength of connections and then analyzing the partitions separately. The case study is the World Trade Web (WTW), the directed graph connecting world countries through trade relationships, with the aim of finding its topological characterization in terms of motifs and isolating the key factors underlying its evolution. The WTW has been split based on the weights of the graph to highlight structural differences between the big players, in terms of volumes of trade, and the rest of the world. As a test case, the period 2003-2010 has been analyzed to show the structural effect of the economic crisis that began in 2007.
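    The partition-then-census idea can be sketched as follows. The weight threshold, the toy edge list, and the restriction to just two triangle motifs (cyclic and transitive) are illustrative assumptions, not the paper's actual method, which considers the full set of triadic motifs:

    ```python
    from itertools import combinations, permutations

    def partition_edges(weighted_edges, threshold):
        """Split a weighted directed graph into strong / weak sub-networks."""
        strong = {(u, v) for u, v, w in weighted_edges if w >= threshold}
        weak = {(u, v) for u, v, w in weighted_edges if w < threshold}
        return strong, weak

    def triad_census(edges):
        """Count two triadic motifs over node triples:
        cyclic triangles (a->b->c->a) and transitive triangles (a->b, b->c, a->c)."""
        nodes = {n for e in edges for n in e}
        counts = {"cycle": 0, "transitive": 0}
        for trio in combinations(sorted(nodes), 3):
            for a, b, c in permutations(trio):
                if (a, b) in edges and (b, c) in edges:
                    if (c, a) in edges:
                        counts["cycle"] += 1       # found once per rotation
                    elif (a, c) in edges:
                        counts["transitive"] += 1
        counts["cycle"] //= 3  # each directed 3-cycle was counted 3 times
        return counts

    # Toy trade network: a strongly connected "big players" cycle,
    # plus a weaker transitive triad among smaller economies.
    strong, weak = partition_edges(
        [("A", "B", 9), ("B", "C", 8), ("C", "A", 7),
         ("X", "Y", 1), ("Y", "Z", 2), ("X", "Z", 1)], threshold=5)
    print(triad_census(strong), triad_census(weak))
    ```

    Comparing the censuses of the two partitions, rather than of the whole graph, is what exposes the structural differences the abstract describes.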

    A Rough Set Approach to Spatio-temporal Outlier Detection

    Abstract. Detecting outliers that are grossly different from, or inconsistent with, the remaining spatio-temporal dataset is a major challenge in real-world knowledge discovery and data mining applications. In this paper, we deal with the outlier detection problem in spatio-temporal data and describe a rough set approach that finds the top outliers in an unlabeled spatio-temporal dataset. The proposed method, called Rough Outlier Set Extraction (ROSE), relies on a rough set theoretic representation of the outlier set using the rough set approximations, i.e. the lower and upper approximations. A new set, called the Kernel set, is also introduced: a representative subset of the original dataset that is significant for outlier detection. Experimental results on real-world datasets demonstrate its superiority over results obtained by various clustering algorithms. It is also shown that the Kernel set is able to detect the same outlier set, but with considerably less computational time.
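    ROSE's own definitions are not spelled out in the abstract; the following is only a minimal sketch of the underlying rough set machinery it relies on — lower and upper approximations of a candidate outlier set over hypothetical indiscernibility classes:

    ```python
    def rough_approximations(equiv_classes, target):
        """Rough set approximations of `target`:
        lower = objects certainly in the set (their whole class lies inside it),
        upper = objects possibly in the set (their class overlaps it)."""
        lower, upper = set(), set()
        for cls in equiv_classes:
            if cls <= target:       # class entirely contained in target
                lower |= cls
            if cls & target:        # class intersects target
                upper |= cls
        return lower, upper

    # Hypothetical indiscernibility classes over objects 1..5,
    # with {1, 2, 3} as the candidate outlier set.
    lower, upper = rough_approximations([{1, 2}, {3, 4}, {5}], {1, 2, 3})
    print(lower, upper)  # the boundary region upper - lower = {3, 4}
    ```

    The boundary region (upper minus lower) holds the "possible but not certain" outliers, which is exactly the kind of uncertainty a rough set representation is meant to capture.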

    Self-similarity and Points of Interest in Textured Images

    We propose the application of symmetry to texture classification. First, we propose a feature vector based on the distribution of local bilateral symmetry in textured images. This feature is particularly effective in distinguishing uniform from non-uniform textures. When used together with a texton-based feature, it improves the classification rate, as tested on four texture datasets. Second, we also present a global clustering of textures based on symmetry.
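    A toy version of a local bilateral (left-right) symmetry score might look like the following; the normalization and the patch values are assumptions for illustration, not the paper's measure:

    ```python
    def bilateral_symmetry(patch):
        """Score in [0, 1]; 1.0 means the patch equals its horizontal mirror.
        `patch` is a 2-D list of non-negative gray levels."""
        flipped = [row[::-1] for row in patch]
        diff = sum(abs(a - b)
                   for row, mrow in zip(patch, flipped)
                   for a, b in zip(row, mrow))
        peak = max(max(row) for row in patch) or 1   # avoid division by zero
        n_pixels = len(patch) * len(patch[0])
        return 1 - diff / (n_pixels * peak)

    print(bilateral_symmetry([[1, 2, 1], [3, 4, 3]]))  # prints 1.0 (mirror-symmetric)
    ```

    A feature vector in the spirit of the abstract would then be a histogram of such scores collected over many local patches of the image.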

    MATRIOSKA: A Multi-level Approach to Fast Tracking by Learning

    In this paper we propose a novel framework for the real-time detection and tracking of an unknown object in a video stream. We decompose the problem into two separate modules: detection and learning. The detection module can use multiple keypoint-based methods (ORB, FREAK, BRISK, SIFT, SURF and more) inside a fallback model, to correctly localize the object frame by frame by exploiting the strengths of each method. The learning module updates the object model, with a growing and pruning approach, to account for changes in its appearance, and extracts negative samples to further improve the detector's performance. To show the effectiveness of the proposed tracking-by-detection algorithm, we present quantitative results on a number of challenging sequences in which the target object undergoes changes of pose, scale and illumination.
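    The fallback idea — try detectors in priority order until one localizes the object — can be sketched like this; the detector interface and the stub detectors are assumptions for illustration, not MATRIOSKA's actual API:

    ```python
    def fallback_detect(frame, detectors):
        """Run keypoint-based detectors in priority order and return the first
        successful localization as (method_name, bounding_box), or None."""
        for name, detect in detectors:
            box = detect(frame)          # each detector returns a box or None
            if box is not None:
                return name, box
        return None

    # Hypothetical detectors: ORB fails on this frame, SIFT succeeds.
    detectors = [
        ("ORB",  lambda frame: None),
        ("SIFT", lambda frame: (10, 10, 50, 50)),
    ]
    print(fallback_detect("frame-1", detectors))  # ('SIFT', (10, 10, 50, 50))
    ```

    The design point is that cheap, fast detectors run first, and slower but more robust ones are consulted only when the earlier ones fail on a frame.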

    A fusion-based approach to digital movie restoration

    Many algorithms have been proposed in the literature for digital movie restoration; unfortunately, none of them ensures a perfect result regardless of the image sequence to be restored. Here we propose a new digital scratch restoration algorithm that achieves higher accuracy than existing algorithms and naturally lends itself to implementation in high-performance computing environments. The basic idea of the proposed algorithm is to adopt several relatively well-settled algorithms for the problem at hand and to combine the obtained results through suitable image fusion techniques, with the aim of taking advantage of the adopted algorithms' capabilities and, at the same time, limiting their deficiencies. Extensive experiments on real image sequences deeply investigate both the accuracy of the presented scratch restoration approach, which is shown to outperform other existing approaches, and the performance of its parallel implementation, which allows for real-time restoration.
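    One common fusion rule is a pixel-wise median across the candidate restorations, which suppresses the occasional gross error of any single algorithm; the sketch below assumes grayscale frames as 2-D lists and is only an illustrative fusion rule, not necessarily the paper's specific technique:

    ```python
    from statistics import median

    def fuse_restorations(candidates):
        """Pixel-wise median fusion of equally sized restored frames."""
        rows, cols = len(candidates[0]), len(candidates[0][0])
        return [[median(c[r][k] for c in candidates) for k in range(cols)]
                for r in range(rows)]

    # Three restorations of the same 1x2 frame; the third mishandles a pixel.
    fused = fuse_restorations([[[10, 10]], [[12, 10]], [[200, 10]]])
    print(fused)  # [[12, 10]] -- the outlier value 200 is voted out
    ```

    Because each pixel is fused independently, the operation is embarrassingly parallel, which matches the abstract's point about a natural high-performance implementation.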

    Parallel processing for image and video processing: Issues and challenges

    Some meaningful hints about parallelization problems in image processing and analysis are discussed. The operation of various architectures used to solve vision problems is reviewed, from pipelines of dedicated operators to general-purpose MIMD machines, passing through specialized SIMD machines and processors with extended instruction sets, together with parallelization tools, from parallel libraries to parallel programming languages. In this context, a discussion of open issues and directions for future research is provided.
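    As a tiny data-parallel illustration (not tied to any specific architecture discussed above), an embarrassingly parallel per-row image operation can be mapped over a pool of worker threads; the row-wise split and the invert operation are illustrative choices:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_rows_parallel(image_rows, op, workers=4):
        """Apply `op` to every image row concurrently; map preserves row order."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(op, image_rows))

    invert = lambda row: [255 - p for p in row]  # a simple point operation
    print(process_rows_parallel([[0, 255], [100, 200]], invert))
    # [[255, 0], [155, 55]]
    ```

    Point operations like this are the easy case; the survey's harder questions concern operations whose neighborhoods or global dependencies cut across any simple tiling of the image.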

    Guest Editorial on Decision Making in Human and Machine Vision

    Granular information processing is one of the human-inspired problem-solving aspects of natural computing, as information abstraction is inherent in human thinking and reasoning processes and plays an essential role in human cognition. Among the different facets of natural computing, fuzzy sets, rough sets and their hybridizations are well-accepted paradigms based on the construction, representation and interpretation of granules, as well as on the utilization of granules for problem solving. These tools are also known as primary constituents of soft computing, whose objective is to provide flexible information processing capability for handling real-life ambiguous situations. They have been successfully employed in various image processing tasks, including image segmentation, enhancement and classification, both individually and in combination with other computing techniques. The reason for such success is that they provide powerful tools to describe the uncertainty naturally embedded in images, which can be exploited in various image processing tasks.